Contextualized B2B Registries
Abstract. Service discovery is a fundamental concept underpinning the move towards dynamic service-oriented business partnerships. The business process for integrating service discovery and the underlying registry technologies into business relationships, procurement and project management functions has not been examined, and hence existing Web Service registries lack capabilities required by business today. In this paper we present a novel contextualized B2B registry that supports dynamic registration and discovery of resources within management contexts, ensuring that the search space is constrained to the scope of authorized and legitimate resources only. We describe how the registry has been deployed in three case studies from important economic sectors (aerospace, automotive, pharmaceutical), showing how contextualized discovery can support distributed product development processes.
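The core mechanism — discovery constrained to the resources registered under a given management context — can be sketched as follows. This is an illustrative toy, not the registry's actual API; all class and method names are assumptions.

```python
from dataclasses import dataclass, field


@dataclass
class Resource:
    name: str
    # management contexts this resource is registered under
    contexts: set = field(default_factory=set)


class ContextualizedRegistry:
    """Toy sketch: discovery is scoped to a management context."""

    def __init__(self):
        self._resources = []

    def register(self, resource, context):
        resource.contexts.add(context)
        if resource not in self._resources:
            self._resources.append(resource)

    def discover(self, query, context):
        # the search space contains only resources authorized for this context
        return [r for r in self._resources
                if context in r.contexts and query in r.name]


reg = ContextualizedRegistry()
reg.register(Resource("wing-stress-analysis"), "aerospace-project")
reg.register(Resource("crash-simulation"), "automotive-project")

# a query from the aerospace context never sees automotive resources
print([r.name for r in reg.discover("stress", "aerospace-project")])
```

The point of the sketch is that authorization is enforced by construction: a match outside the caller's context is simply never in the candidate set.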
B2B Infrastructures in the Process of Drug Discovery and Healthcare
In this paper we describe a demonstration of an innovative B2B infrastructure which can be used to support collaborations in the pharmaceutical industry towards the goal of drug discovery. Based on experience gained in a wide range of collaborative projects in the areas of grid technology, semantics and data management, we outline future work and new topics in B2B infrastructures which arise when considering the use of patient records in the process of drug discovery and in healthcare applications.
Automatic annotation of bioinformatics workflows with biomedical ontologies
Legacy scientific workflows, and the services within them, often present scarce and unstructured (i.e. textual) descriptions. This makes it difficult to find, share and reuse them, thus dramatically reducing their value to the community. This paper presents an approach to annotating workflows and their subcomponents with ontology terms, in an attempt to describe these artifacts in a structured way. Despite a dearth of even textual descriptions, we automatically annotated 530 myExperiment bioinformatics-related workflows, including more than 2600 workflow-associated services, with relevant ontological terms. Quantitative evaluation of the Information Content of these terms suggests that, in cases where annotation was possible at all, the annotation quality was comparable to manually curated bioinformatics resources. Comment: 6th International Symposium on Leveraging Applications (ISoLA 2014 conference), 15 pages, 4 figures.
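The Information Content measure used for the evaluation is, in the standard Resnik formulation, IC(c) = −log p(c), where p(c) is the probability of a term (or any of its descendants) occurring in an annotated corpus. A minimal sketch, with a toy corpus rather than the myExperiment data:

```python
import math

# Resnik-style Information Content: rarer (more specific) ontology terms
# carry more information than frequent (more general) ones.
# The corpus and ontology below are illustrative toys.

def information_content(term, annotations, descendants):
    total = len(annotations)
    # a term "occurs" when it or one of its descendants is used as an annotation
    hits = sum(1 for a in annotations
               if a == term or a in descendants.get(term, set()))
    p = hits / total
    return -math.log(p) if p > 0 else float("inf")


annotations = ["sequence_alignment", "blast", "blast", "parsing"]
descendants = {"sequence_analysis": {"sequence_alignment", "blast"}}

# "parsing" occurs once (p = 1/4); "sequence_analysis" subsumes 3 of 4
# annotations (p = 3/4), so the specific term has the higher IC.
print(information_content("parsing", annotations, descendants))
print(information_content("sequence_analysis", annotations, descendants))
```

High average IC of the assigned terms is what the abstract's "annotation quality comparable to manually curated resources" claim is measured against.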
Interoperable multimedia metadata through similarity-based semantic web service discovery
The increasing availability of multimedia (MM) resources on the Web, services as well as content, raises the need to automatically discover and process resources out of distributed repositories. However, the heterogeneity of applied metadata schemas and vocabularies – ranging from XML-based schemas such as MPEG-7 to formal knowledge representation approaches – raises interoperability problems. To enable MM metadata interoperability by means of automated similarity computation, we propose a hybrid representation approach which combines symbolic MM metadata representations with a grounding in so-called Conceptual Spaces (CS). In this way, we enable automatic computation of similarities across distinct metadata vocabularies and schemas in terms of spatial distances in shared CS. Moreover, such a vector-based approach is particularly well suited to represent MM metadata, given that a majority of MM parameters is provided in terms of quantified metrics. To prove the feasibility of our approach, we provide a prototypical implementation facilitating similarity-based discovery of publicly available MM services, aiming at federated MM content retrieval out of heterogeneous repositories.
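The grounding idea can be sketched in a few lines: records from two different vocabularies are projected onto the same quality dimensions of a shared conceptual space, and similarity becomes a spatial distance. The dimensions, field names, and mappings below are illustrative assumptions, not an actual MPEG-7 alignment.

```python
import math

# Shared quality dimensions of a toy conceptual space.
SHARED_DIMENSIONS = ("brightness", "duration_s", "frame_rate")


def to_point(metadata, mapping):
    """Project a vocabulary-specific record onto the shared dimensions."""
    return tuple(float(metadata[mapping[d]]) for d in SHARED_DIMENSIONS)


def distance(p, q):
    """Euclidean distance in the shared conceptual space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))


# The same video described with two different schemas/vocabularies.
mpeg7_like = {"Brightness": 0.7, "MediaDuration": 120, "FrameRate": 25}
custom = {"lum": 0.68, "len_seconds": 118, "fps": 25}

p = to_point(mpeg7_like, {"brightness": "Brightness",
                          "duration_s": "MediaDuration",
                          "frame_rate": "FrameRate"})
q = to_point(custom, {"brightness": "lum",
                      "duration_s": "len_seconds",
                      "frame_rate": "fps"})

# a small distance indicates high similarity across vocabularies
print(distance(p, q))
```

In a real system the dimensions would carry weights and salience, but the essential move is the same: symbolic heterogeneity is resolved by comparing points, not vocabularies.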
High-Throughput Screening for Modulators of CFTR Activity Based on Genetically Engineered Cystic Fibrosis Disease-Specific iPSCs
Organotypic culture systems from disease-specific induced pluripotent stem cells (iPSCs) exhibit obvious advantages compared with immortalized cell lines and primary cell cultures, but implementation of iPSC-based high-throughput (HT) assays is still technically challenging. Here, we demonstrate the development and execution of an organotypic HT Cl⁻/I⁻ exchange assay using cystic fibrosis (CF) disease-specific iPSCs. The introduction of a halide-sensitive YFP variant enabled automated quantitative measurement of Cystic Fibrosis Transmembrane Conductance Regulator (CFTR) function in iPSC-derived intestinal epithelia. CFTR function was partially rescued by treatment with VX-770 and VX-809, and seamless gene correction of the p.Phe508del mutation resulted in full restoration of CFTR function. The identification of a series of validated primary hits that improve the function of p.Phe508del CFTR from a library of 42,500 chemical compounds demonstrates that the advantages of complex iPSC-derived culture systems for disease modeling can also be utilized for drug screening in a true HT format.
A lightweight, flow-based toolkit for parallel and distributed bioinformatics pipelines
Background: Bioinformatic analyses typically proceed as chains of data-processing tasks. A pipeline, or 'workflow', is a well-defined protocol, with a specific structure defined by the topology of data-flow interdependencies, and a particular functionality arising from the data transformations applied at each step. In computer science, the dataflow programming (DFP) paradigm defines software systems constructed in this manner, as networks of message-passing components. Thus, bioinformatic workflows can be naturally mapped onto DFP concepts.

Results: To enable the flexible creation and execution of bioinformatics dataflows, we have written a modular framework for parallel pipelines in Python ('PaPy'). A PaPy workflow is created from re-usable components connected by data-pipes into a directed acyclic graph, which together define nested higher-order map functions. The successive functional transformations of input data are evaluated on flexibly pooled compute resources, either local or remote. Input items are processed in batches of adjustable size, allowing one to tune the trade-off between parallelism and lazy evaluation (memory consumption). An add-on module ('NuBio') facilitates the creation of bioinformatics workflows by providing domain-specific data containers (e.g., for biomolecular sequences, alignments, structures) and functionality (e.g., to parse/write standard file formats).

Conclusions: PaPy offers a modular framework for the creation and deployment of parallel and distributed data-processing workflows. Pipelines derive their functionality from user-written, data-coupled components, so PaPy can also be viewed as a lightweight toolkit for extensible, flow-based bioinformatics data-processing. The simplicity and flexibility of distributed PaPy pipelines may help users bridge the gap between traditional desktop/workstation and grid computing. PaPy is freely distributed as open-source Python code at http://muralab.org/PaPy, and includes extensive documentation and annotated usage examples.
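The dataflow idea — user-written functions chained into a pipeline and mapped over the input stream on pooled workers — can be illustrated with a minimal sketch in the spirit of PaPy (this is not PaPy's actual API; function names are illustrative):

```python
from multiprocessing import Pool

# Two user-written, data-coupled components for a toy sequence pipeline.

def reverse_complement(seq):
    comp = {"A": "T", "T": "A", "C": "G", "G": "C"}
    return "".join(comp[b] for b in reversed(seq))


def gc_content(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)


def pipeline(items, stages, mapper):
    """Successive higher-order map: each stage is mapped over the stream."""
    for stage in stages:
        items = mapper(stage, items)
    return list(items)


if __name__ == "__main__":
    # evaluate the chained transformations on a pool of worker processes
    with Pool(2) as pool:
        out = pipeline(["ATGC", "GGCC"],
                       [reverse_complement, gc_content],
                       pool.map)
    print(out)  # → [0.5, 1.0]
```

A real flow-based framework adds the DAG topology, batching, and remote workers on top, but the core is exactly this: stages are data-coupled only through the stream, so swapping `pool.map` for a serial `map` (or a remote scheduler) changes where the work runs without touching the components.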
COBIDS: A component-based framework for the integration of geo-applications in a distributed GI-infrastructure
The use of existing distributed data and compute services via GI-infrastructures is becoming more and more important for the success of numerous geo-scientific projects. In this context, two major problems generally arise. First, the necessary GI-services typically have heterogeneous interfaces and do not share a common data model; here, techniques which provide integrated access are needed. The second problem lies in coupling existing local applications (i.e. tools for simulation, geometric modelling or visualisation) with the distributed service infrastructure. The COBIDS framework, which is discussed in this paper, offers solutions to both of these problems. In particular, our focus in this paper is on the COBIDS techniques for connecting local applications to distributed services.
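The first of the two problems — integrated access to services with heterogeneous interfaces and data models — is classically solved with adapters behind a common interface. A minimal sketch of that pattern (class and method names are illustrative assumptions, not the actual COBIDS API):

```python
# Adapters give heterogeneous geo-services one common call and one common
# result shape, so a local application sees a single data model.

class WfsAdapter:
    """Wraps a remote WFS-style feature service behind get_features()."""

    def __init__(self, endpoint):
        self.endpoint = endpoint

    def get_features(self, bbox):
        # a real adapter would issue a GetFeature request to the endpoint
        return [{"source": self.endpoint, "bbox": bbox, "kind": "remote"}]


class LocalFileAdapter:
    """Exposes a local shapefile-like source through the same interface."""

    def __init__(self, path):
        self.path = path

    def get_features(self, bbox):
        # a real adapter would read and clip the local dataset here
        return [{"source": self.path, "bbox": bbox, "kind": "local"}]


def integrated_query(services, bbox):
    """The application queries all backends uniformly."""
    results = []
    for service in services:
        results.extend(service.get_features(bbox))
    return results


features = integrated_query(
    [WfsAdapter("http://example.org/wfs"),
     LocalFileAdapter("/data/rivers.shp")],
    bbox=(5.0, 50.0, 6.0, 51.0),
)
print(len(features))  # → 2, one result set per backend
```

The second problem, coupling a local application to the distributed infrastructure, is then a matter of handing the application such adapter objects instead of raw service endpoints.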